16 research outputs found

    Uncertainty as a Swiss army knife: new adversarial attack and defense ideas based on epistemic uncertainty

    Although state-of-the-art deep neural network models are known to be robust to random perturbations, these architectures have been shown to be quite vulnerable to deliberately crafted, quasi-imperceptible perturbations. These vulnerabilities make it challenging to deploy deep neural network models in security-critical areas. In recent years, many studies have been conducted to develop new attack methods and new defense techniques that enable more robust and reliable models. In this study, we use the quantified epistemic uncertainty obtained from the model's final probability outputs, along with the model's own loss function, to generate more effective adversarial samples. We also propose a novel defense approach against attacks such as Deepfool, which produce adversarial samples located near the model's decision boundary. We verified the effectiveness of our attack method on the MNIST (Digit), MNIST (Fashion) and CIFAR-10 datasets. In our experiments, our proposed uncertainty-based reversal method achieved a worst-case success rate of around 95% without compromising clean accuracy. Publisher's Version. WOS:00077742940000
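    The abstract does not spell out the reversal procedure itself; as a rough, hedged illustration of the general idea (pulling a near-boundary sample back toward the region where the model is confident in its own prediction), the sketch below takes a few small signed gradient steps that decrease the model's loss for its currently predicted label. The model, the step size eps, the step count, and the [0, 1] input range are placeholder assumptions, not the authors' exact method.

    import torch
    import torch.nn.functional as F

    def reversal_step_sketch(model, x, eps=0.01, steps=5):
        # Illustrative only: nudge x toward lower loss for the class the model
        # already predicts, i.e. away from the decision boundary.
        x_rev = x.clone().detach()
        for _ in range(steps):
            x_rev.requires_grad_(True)
            logits = model(x_rev)
            pred = logits.argmax(dim=1)           # model's own prediction
            loss = F.cross_entropy(logits, pred)  # loss w.r.t. predicted label
            grad = torch.autograd.grad(loss, x_rev)[0]
            # descend the loss (the opposite of an FGSM-style ascent step)
            x_rev = (x_rev - eps * grad.sign()).clamp(0, 1).detach()
        return x_rev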

    Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

    Deep neural network (DNN) architectures are considered to be robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against these attacks with more robust DNN architectures. However, most of the current research has concentrated on utilizing the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty obtained from Monte-Carlo Dropout Sampling for adversarial attack purposes, by which we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that utilize the model's epistemic uncertainty to exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution. Our results show that our proposed hybrid attack approach increases the attack success rate from 82.59% to 85.14%, from 82.96% to 90.13% and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion and CIFAR-10 datasets, respectively. Publisher's Version. WOS:000757777400006. PMID: 3522177
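    The paper's exact attack formulation is not given in this abstract; the sketch below only illustrates the two ingredients it names: quantifying epistemic uncertainty from Monte-Carlo Dropout forward passes, and forming a hybrid perturbation direction from the loss gradient plus an uncertainty gradient. The variance-based uncertainty score, the weighting factor lam, and the single FGSM-style step are assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def mc_dropout_uncertainty(model, x, T=30):
        # Epistemic-uncertainty proxy: variance of the softmax outputs over T
        # stochastic forward passes with dropout kept active (model.train()).
        model.train()
        probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(T)])
        return probs.var(dim=0).sum(dim=1)  # one scalar score per sample

    def hybrid_fgsm_step(model, x, y, eps=0.03, lam=1.0, T=30):
        # Hypothetical hybrid objective: cross-entropy loss + lam * uncertainty.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        unc = mc_dropout_uncertainty(model, x_adv, T=T).mean()
        grad = torch.autograd.grad(loss + lam * unc, x_adv)[0]
        return (x_adv + eps * grad.sign()).clamp(0, 1).detach()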

    AI-powered edge computing evolution for beyond 5G communication networks

    Edge computing is a key enabling technology that is expected to play a crucial role in beyond 5G (B5G) and 6G communication networks. By bringing computation closer to where the data is generated, and by leveraging Artificial Intelligence (AI) capabilities for advanced automation and orchestration, edge computing can enable a wide range of emerging applications with extreme latency and computation requirements across multiple vertical domains. In this context, this paper first discusses the key technological challenges for the seamless integration of edge computing within B5G/6G and then presents a roadmap for the edge computing evolution, proposing a novel design approach for an open, intelligent, trustworthy, and distributed edge architecture. VERGE has received funding from the Smart Networks and Services Joint Undertaking (SNS JU) under the European Union's Horizon Europe research and innovation programme, Grant Agreement No 101096034. Peer Reviewed. Postprint (author's final draft).

    Using uncertainty metrics for attack and defense in adversarial machine learning (Belirsizlik metriklerinin hasmane makine öğrenmesinde saldırı ve savunma amaçlı kullanılması)

    Text in English; abstract in English and Turkish. Includes bibliographical references (leaves 94-102). xv, 102 leaves. Deep Neural Network (DNN) models are widely renowned for their resistance to random perturbations. However, researchers have found that these models are in fact extremely vulnerable to deliberately crafted and seemingly imperceptible perturbations of the input, defined as adversarial samples. Adversarial attacks have the potential to substantially compromise the security of DNN-powered systems and pose high risks, especially in areas where security is a top priority. Numerous studies have been conducted in recent years to defend against these attacks and to develop more robust architectures resistant to adversarial threats. In this thesis, we leverage various uncertainty metrics obtained from MC-Dropout estimates of the model to develop new attack and defense ideas. On the defense side, we propose a new adversarial detection mechanism and an uncertainty-based defense method to increase the robustness of DNN models against adversarial evasion attacks. On the attack side, we use the quantified epistemic uncertainty obtained from the model's final probability outputs, along with the model's own loss function, to generate effective adversarial samples. We have experimentally evaluated and verified the efficacy of our proposed approaches on standard computer vision datasets.
    Contents:
    INTRODUCTION: Vulnerabilities of AI-driven Systems; Importance of Uncertainty for AI-driven Systems; Problem Statement; Motivation for Using Uncertainty Information; Main Contributions of the Thesis Dissertation; Organization of the Thesis Dissertation
    ADVERSARIAL MACHINE LEARNING: Adversarial Attacks; Formal Definition of Adversarial Sample; Distance Metrics; Attacker Objective; Capability of the Attacker; Adversarial Attack Types; Fast-Gradient Sign Method; Iterative Gradient Sign Method; Projected Gradient Descent; Jacobian-based Saliency Map Attack (JSMA); Carlini & Wagner Attack; Deepfool Attack; HopSkipJump Attack; Universal Adversarial Attack; Adversarial Defense; Defensive Distillation; Adversarial Training; MagNet; Detection of Adversarial Samples
    UNCERTAINTY IN MACHINE LEARNING: Types of Uncertainty in Machine Learning; Epistemic Uncertainty; Aleatoric Uncertainty; Scibilic Uncertainty; Quantifying Uncertainty in Deep Neural Networks; Quantification of Epistemic Uncertainty via MC-Dropout Sampling; Quantification of Aleatoric Uncertainty via MC-Dropout Sampling; Quantification of Epistemic and Aleatoric Uncertainty via MC-Dropout Sampling; Moment-Based Predictive Uncertainty Quantification
    ADVERSARIAL SAMPLE DETECTION: Uncertainty Quantification; Explanatory Research on Uncertainty Quantification Methods; Proposed Closeness Metric; Explanatory Research on Our Closeness Metric; Summary of the Algorithm; Results (Experimental Setup; Experimental Results; Further Results and Discussion)
    ADVERSARIAL ATTACK: Approach; Proposed Epistemic Uncertainty Based Attacks; Fast Gradient Sign Method (Uncertainty-Based); Basic Iterative Attack (BIM-A Uncertainty-Based); Basic Iterative Attack (BIM-A Hybrid Approach); Basic Iterative Attack (BIM-B Hybrid Approach); Visualizing Gradient Path for Uncertainty-Based Attacks; Visualizing Uncertainty Under Different Attack Variants; Search for a More Efficient Attack Algorithm; Rectified Basic Iterative Attack; Attacker's Capability; Results (Experimental Setup; Experimental Results; Further Results and Discussion)
    ADVERSARIAL DEFENSE: Approach; Intuition Behind Using Uncertainty-Based Reversal Process; Uncertainty-Based Reversal Operation; Enhanced Uncertainty-Based Reversal Operation; The Usage of Uncertainty-Based Reversal; The Effect of Uncertainty-Based Reversal; Variants of the Enhanced Uncertainty-Based Reversal Operation; Hybrid Deployment Options (Via Adversarial Training; Via Defensive Distillation); The Effect on Clean Data Performance; Results (Part 1: Experimental Setup; Experimental Results; Discussions and Results with Hybrid Approach (Adversarial Training); Part 2: Discussions and Further Results)
    CONCLUSION AND FUTURE WORK
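    For context on the "Moment-Based Predictive Uncertainty Quantification" chapter listed above, a commonly used moment-based decomposition of the predictive covariance over T MC-Dropout forward passes splits it into aleatoric and epistemic parts (the exact estimator used in the thesis may differ):

    \widehat{\operatorname{Cov}}(y^{*}) \approx
      \underbrace{\frac{1}{T}\sum_{t=1}^{T}\Big(\operatorname{diag}(\hat{p}_t)-\hat{p}_t\hat{p}_t^{\top}\Big)}_{\text{aleatoric}}
      + \underbrace{\frac{1}{T}\sum_{t=1}^{T}\big(\hat{p}_t-\bar{p}\big)\big(\hat{p}_t-\bar{p}\big)^{\top}}_{\text{epistemic}},
      \qquad \bar{p}=\frac{1}{T}\sum_{t=1}^{T}\hat{p}_t,

    where \hat{p}_t denotes the softmax output of the t-th dropout-enabled forward pass.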

    BOUN-ISIK participation: an unsupervised approach for the named entity normalization and relation extraction of Bacteria Biotopes

    This paper presents our participation in the Bacteria Biotope Task of the BioNLP Shared Task 2019. Our participation includes two systems for the two subtasks of the Bacteria Biotope Task: the normalization of entities (BB-norm) and the identification of the relations between the entities in a given biomedical text (BB-rel). For the normalization of entities, we utilized word embeddings and syntactic re-ranking. For the relation extraction task, we used pre-defined rules. Although both approaches are unsupervised, in the sense that they do not need any labeled data, they achieved promising results. In particular, for the BB-norm task, the results show that the proposed method performs as well as deep-learning-based methods, which require labeled data. Publisher's Version.
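    The BB-norm system itself is only summarized here; as a hedged illustration of the embedding-based normalization idea (without the syntactic re-ranking step the authors also use), one could rank candidate ontology terms by cosine similarity between averaged word vectors. The embed lookup table and the ontology_terms list are placeholders.

    import numpy as np

    def normalize_entity(mention, ontology_terms, embed):
        # embed: dict mapping a lowercased token to its word vector (placeholder).
        # ontology_terms: iterable of (term_id, term_label) pairs from the target ontology.
        def avg_vec(text):
            vecs = [embed[w] for w in text.lower().split() if w in embed]
            return np.mean(vecs, axis=0) if vecs else None

        m = avg_vec(mention)
        best, best_sim = None, -1.0
        for term_id, term_label in ontology_terms:
            t = avg_vec(term_label)
            if m is None or t is None:
                continue
            sim = float(np.dot(m, t) / (np.linalg.norm(m) * np.linalg.norm(t) + 1e-12))
            if sim > best_sim:
                best, best_sim = term_id, sim
        return best, best_sim   # most similar ontology term and its score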

    TENET: a new hybrid network architecture for adversarial defense

    This work was supported by The Scientific and Technological Research Council of Turkey (TUBITAK) through the 1515 Frontier Research and Development Laboratories Support Program under Project 5169902, and has been partly funded by the European Union's Horizon Europe research and innovation programme and the Smart Networks and Services Joint Undertaking (SNS JU) under Grant Agreement No 101096034 (VERGE Project). Deep neural network (DNN) models are widely renowned for their resistance to random perturbations. However, researchers have found that these models are in fact extremely vulnerable to deliberately crafted and seemingly imperceptible perturbations of the input, referred to as adversarial examples. Adversarial attacks have the potential to substantially compromise the security of DNN-powered systems and pose high risks, especially in areas where security is a top priority. Numerous studies have been conducted in recent years to defend against these attacks and to develop more robust architectures resistant to adversarial threats. In this study, we propose a new architecture and enhance a recently proposed technique by which we can restore adversarial samples back to their original class manifold. We leverage several uncertainty metrics obtained from Monte Carlo dropout (MC Dropout) estimates of the model, together with the model's own loss function, and combine them with the defensive distillation technique to defend against these attacks. We have experimentally evaluated and verified the efficacy of our approach on the MNIST (Digit), MNIST (Fashion) and CIFAR-10 datasets. In our experiments, we showed that our proposed method reduces the attack success rate to below 5% without compromising clean accuracy. Publisher's Version.
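    TENET's full hybrid architecture is not described in this abstract; as background, the defensive distillation component it builds on is usually implemented as below: a teacher network is trained with a temperature-scaled softmax, its soft labels at the same temperature supervise a student, and the student is deployed at temperature 1. The temperature value and the training-loop details are placeholder assumptions.

    import torch
    import torch.nn.functional as F

    def distillation_soft_labels(teacher, x, T=20.0):
        # Soft labels from the teacher at temperature T.
        with torch.no_grad():
            return F.softmax(teacher(x) / T, dim=1)

    def student_distillation_loss(student_logits, soft_labels, T=20.0):
        # Cross-entropy between the student's temperature-scaled prediction
        # and the teacher's soft labels.
        log_p = F.log_softmax(student_logits / T, dim=1)
        return -(soft_labels * log_p).sum(dim=1).mean()

    # One hypothetical training step (teacher, student, optimizer, x_batch assumed):
    # soft = distillation_soft_labels(teacher, x_batch, T=20.0)
    # loss = student_distillation_loss(student(x_batch), soft, T=20.0)
    # loss.backward(); optimizer.step(); optimizer.zero_grad()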

    Closeness and uncertainty aware adversarial examples detection in adversarial machine learning

    While deep learning models are thought to be resistant to random perturbations, it has been demonstrated that these architectures are vulnerable to deliberately crafted, quasi-imperceptible perturbations. These vulnerabilities make it challenging to deploy Deep Neural Network (DNN) models in security-critical areas. Recently, many research studies have been conducted to develop defense techniques enabling more robust models. In this paper, we target detecting adversarial samples by differentiating them from their clean equivalents. We investigate various metrics for detecting adversarial samples. We first leverage moment-based predictive uncertainty estimates of DNN classifiers derived through Monte-Carlo (MC) Dropout Sampling. We also introduce a new method that operates in the subspace of deep features obtained by the model. We verified the effectiveness of our approach on different datasets. Our experiments show that these approaches complement each other, and the combined use of all metrics yields a 99% ROC-AUC adversarial detection score for well-known attack algorithms. Publisher's Version. Science Citation Index Expanded (SCI-EXPANDED). WOS:000798073500009. Affiliation ID: 6001047
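    The abstract names two families of detection signals (moment-based MC-Dropout uncertainty and a metric computed in the model's deep-feature subspace) without giving the detector itself; the sketch below shows one generic way such per-sample scores could be combined into a detector and scored by ROC-AUC. The logistic-regression combiner and the choice of closeness score are assumptions, not the paper's method.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def fit_adversarial_detector(unc_scores, closeness_scores, labels):
        # unc_scores, closeness_scores: per-sample scores, e.g. MC-Dropout predictive
        # variance and a distance computed in deep-feature space.
        # labels: 1 for adversarial, 0 for clean.
        X = np.column_stack([unc_scores, closeness_scores])
        det = LogisticRegression().fit(X, labels)
        # In practice the ROC-AUC would be computed on a held-out split.
        auc = roc_auc_score(labels, det.predict_proba(X)[:, 1])
        return det, auc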

    Unreasonable effectiveness of last hidden layer activations for adversarial robustness

    In standard Deep Neural Network (DNN) based classifiers, the general convention is to omit the activation function in the last (output) layer and to apply the softmax function directly on the logits to get the probability score of each class. In this type of architecture, the loss value of the classifier for any output class is directly proportional to the difference between the final probability score and the label value of the associated class. Standard white-box adversarial evasion attacks, whether targeted or untargeted, mainly try to exploit the gradient of the model loss function to craft adversarial samples and fool the model. In this study, we show both mathematically and experimentally that using some widely known activation functions in the output layer of the model, with high temperature values, has the effect of zeroing out the gradients for both targeted and untargeted attack cases, preventing attackers from exploiting the model's loss function to craft adversarial samples. We experimentally verified the efficacy of our approach on the MNIST (Digit) and CIFAR-10 datasets. Detailed experiments confirmed that our approach substantially improves robustness against gradient-based targeted and untargeted attack threats. We also showed that the increased non-linearity at the output layer has some additional benefits against other attack methods such as the Deepfool attack. Publisher's Version. WOS:00085598330016
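    The abstract does not specify which activation is used or exactly how the temperature enters; purely to illustrate the gradient-masking effect it describes, the sketch below passes the logits through a steep, saturating tanh before the softmax/cross-entropy and measures the input-gradient norm. The tanh choice, the scale factor beta, and its placement are assumptions, not the paper's exact construction. Comparing input_grad_norm with saturate=True and saturate=False on the same batch illustrates the zeroed-gradient effect.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SaturatedOutputNet(nn.Module):
        # Toy MNIST-sized classifier whose logits pass through a steep tanh,
        # so d(output)/d(logit) is nearly zero once |beta * z| is large.
        def __init__(self, beta=100.0, saturate=True):
            super().__init__()
            self.fc1, self.fc2 = nn.Linear(784, 128), nn.Linear(128, 10)
            self.beta, self.saturate = beta, saturate

        def forward(self, x):
            z = self.fc2(torch.relu(self.fc1(x)))
            return torch.tanh(self.beta * z) if self.saturate else z

    def input_grad_norm(model, x, y):
        # Norm of the loss gradient w.r.t. the input, the quantity white-box
        # attacks rely on; it collapses when the output activation saturates.
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        return torch.autograd.grad(loss, x)[0].norm().item()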

    The legacy of skilled hands: the Devrek walking stick (Hünerli ellerin mirası: Devrek Bastonu)

    Ankara: İhsan Doğramacı Bilkent Üniversitesi, İktisadi, İdari ve Sosyal Bilimler Fakültesi, Tarih Bölümü, 2016. This work is a student project of the Department of History, Faculty of Economics, Administrative and Social Sciences, İhsan Doğramacı Bilkent University. By Fatih Pamuk.